
    A neural surveyor to map touch on the body

    Perhaps the most recognizable sensory map in all of neuroscience is the somatosensory homunculus. Although it seems straightforward, this simple representation belies the complex link between an activation in a somatotopic map and the associated touch location on the body. Any isolated activation is spatially ambiguous without a neural decoder that can read its position within the entire map, but how this is computed by neural networks is unknown. We propose that the somatosensory system implements multilateration, a common computation used by surveying and global positioning systems to localize objects. Specifically, to decode touch location on the body, multilateration estimates the relative distance between the afferent input and the boundaries of a body part (e.g., the joints of a limb). We show that a simple feedforward neural network, which captures several fundamental receptive field properties of cortical somatosensory neurons, can implement a Bayes-optimal multilateration computation. Simulations demonstrated that this decoder produced a pattern of localization variability between two boundaries that was unique to multilateration. Finally, we identify this computational signature of multilateration in actual psychophysical experiments, suggesting that it is a candidate computational mechanism underlying tactile localization.
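
    To make the idea concrete, here is a minimal simulation sketch of multilateration between two boundaries. It assumes, purely for illustration, that the noise of each distance estimate grows linearly with distance (a Weber-like scaling with slope k) and that the two estimates are fused by inverse-variance weighting; the limb length, noise parameters, and function names are hypothetical, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        L = 1.0        # normalized limb length (assumed)
        k = 0.1        # Weber-like noise slope (assumed)
        n = 10_000     # simulated trials per location

        def localize(x):
            # Noisy distance estimates from each boundary (proximal and distal joint)
            d1 = x + rng.normal(0, k * x, n)
            d2 = (L - x) + rng.normal(0, k * (L - x), n)
            # Inverse-variance-weighted (Bayes-optimal) fusion of the two
            # candidate estimates, using the true variances for simplicity
            v1, v2 = (k * x) ** 2, (k * (L - x)) ** 2
            w1 = v2 / (v1 + v2)
            return w1 * d1 + (1 - w1) * (L - d2)

        for x in (0.1, 0.3, 0.5, 0.7, 0.9):
            print(f"x = {x:.1f}  sd = {localize(x).std():.4f}")

    The printed standard deviations peak midway between the two boundaries, the inverted-U localization signature that the abstract describes as unique to multilateration.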

    Evidence for the predictive remapping of visual attention

    When attending to an object in visual space, perception of the object remains stable despite frequent eye movements. It is assumed that visual stability is due to the process of remapping, in which retinotopically organized maps are updated to compensate for the retinal shifts caused by eye movements. Remapping is predictive when it starts before the actual eye movement. Until now, most evidence for predictive remapping has been obtained in single-cell studies involving monkeys. Here, we report that predictive remapping affects visual attention prior to an eye movement. We show that, immediately following a saccade, attention has partly shifted with the saccade (Experiment 1). Importantly, we show that remapping is predictive and affects the locus of attention prior to saccade execution (Experiments 2 and 3): before the saccade was executed, there was attentional facilitation at the location which, after the saccade, would retinotopically match the attended location.
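
    The geometry behind that last result can be stated in a few lines. In the sketch below (illustrative coordinates only; all numbers are hypothetical), retinotopic position is world position minus gaze position, so the location facilitated before the saccade is the attended location shifted back by the saccade vector.

        import numpy as np

        attended = np.array([4.0, 1.0])    # attended location (deg, world-centered)
        gaze_pre = np.array([0.0, 0.0])    # fixation before the saccade
        gaze_post = np.array([3.0, 0.0])   # saccade target
        saccade = gaze_post - gaze_pre

        # Retinotopic coordinates of the attended location after the saccade
        retino_post = attended - gaze_post           # -> [1, 1]

        # Predictive remapping: the world location that *currently* occupies
        # those retinotopic coordinates is the attended location shifted back
        # by the saccade vector
        remapped = attended - saccade                # -> [1, 1] in world terms
        print(retino_post, remapped - gaze_pre)      # identical retinotopic coords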

    Changing Human Visual Field Organization from Early Visual to Extra-Occipital Cortex

    BACKGROUND: The early visual areas have a clear topographic organization, such that adjacent parts of the cortical surface represent distinct yet adjacent parts of the contralateral visual field. We examined whether cortical regions outside occipital cortex show a similar organization. METHODOLOGY/PRINCIPAL FINDINGS: The BOLD responses to discrete visual field locations that varied in both polar angle and eccentricity were measured using two different tasks. As described previously, numerous occipital regions are both selective for the contralateral visual field and show topographic organization within that field. Extra-occipital regions are also selective for the contralateral visual field, but possess little (or no) topographic organization. A regional analysis demonstrates that this weak topography is not due to increased receptive field size in extra-occipital areas. CONCLUSIONS/SIGNIFICANCE: A number of extra-occipital areas are identified that are sensitive to visual field location. Neurons in these areas corresponding to different locations in the contralateral visual field do not demonstrate any regular or robust topographic organization, but appear instead to be intermixed on the cortical surface. This suggests a shift from processing that is predominantly local in visual space, in occipital areas, to global, in extra-occipital areas. Global processing fits with a role for these extra-occipital areas in selecting a spatial locus for attention and/or eye movements.
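
    One simple way to quantify the topographic organization at issue (an illustrative measure, not the analysis used in the paper) is to correlate pairwise distances on the cortical surface with pairwise distances between the visual-field locations each voxel prefers: a high correlation means adjacent cortex represents adjacent visual field, while a near-zero correlation indicates the intermixed arrangement described above.

        import numpy as np

        def topography_index(cortical_xy, preferred_vf_xy):
            # Correlate cortical-surface distances with preferred
            # visual-field distances across all voxel pairs
            i, j = np.triu_indices(len(cortical_xy), k=1)
            d_cortex = np.linalg.norm(cortical_xy[i] - cortical_xy[j], axis=1)
            d_visual = np.linalg.norm(preferred_vf_xy[i] - preferred_vf_xy[j], axis=1)
            return np.corrcoef(d_cortex, d_visual)[0, 1]

        rng = np.random.default_rng(1)
        cortex = rng.uniform(0, 10, (200, 2))   # synthetic voxel positions
        # Orderly map: preferences follow cortical position -> index near 1
        print(topography_index(cortex, cortex + rng.normal(0, 1, cortex.shape)))
        # Intermixed preferences -> index near 0
        print(topography_index(cortex, rng.uniform(0, 10, (200, 2))))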

    What is ‘anti’ about anti-reaches? Reference frames selectively affect reaction times and endpoint variability

    Reach movement planning involves the representation of spatial target information in different reference frames. Neurons at parietal and premotor stages of the cortical sensorimotor system represent target information in eye- or hand-centered reference frames, respectively. How the different neuronal representations affect behavioral parameters of motor planning and control, i.e., which stage of neural representation is relevant for which aspect of behavior, is not obvious from the physiology. Here, we test with a behavioral experiment whether different kinematic movement parameters are affected to a different degree by either an eye- or a hand-centered reference frame. We used a generalized anti-reach task to test the influence of stimulus-response compatibility (SRC) in eye- and hand-reference frames on reach reaction times, movement times, and endpoint variability. While in a standard anti-reach task the SRC is identical in the eye- and hand-reference frames, we could separate the SRC for the two reference frames. We found that reaction times were influenced by the SRC in both the eye- and hand-reference frames. In contrast, movement times were only influenced by the SRC in the hand-reference frame, and endpoint variability was only influenced by the SRC in the eye-reference frame. Since movement times and endpoint variability result from both planning and control processes, while reaction times are consequences of the planning process alone, we suggest that SRC effects on reaction times are highly suited to investigate reference frames of movement planning, and that eye- and hand-reference frames have distinct effects on different phases of motor action and different kinematic movement parameters.
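
    The logic of separating compatibility in the two frames can be sketched in a few lines of code (hypothetical positions and names, one spatial axis for simplicity): a cue and movement goal count as compatible in a given frame when they lie on the same side of that frame's origin, so choosing the starting hand position independently of fixation dissociates the two frames.

        def compatible(cue, goal, origin):
            # SRC: cue and movement goal on the same side of the frame's origin
            return (cue - origin) * (goal - origin) > 0

        fix, hand = 0.0, -10.0      # gaze at 0 deg, hand starting at -10 deg
        cue = 8.0
        goal = 2 * fix - cue        # anti-reach: mirror the cue across fixation -> -8 deg

        print(compatible(cue, goal, fix))    # False: incompatible in the eye frame
        print(compatible(cue, goal, hand))   # True:  compatible in the hand frame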

    The effects of visual control and distance in modulating peripersonal spatial representation

    In the presence of vision, finalized motor acts can trigger spatial remapping, i.e., reference frame transformations that allow for a better interaction with targets. However, it is yet unclear how the peripersonal space is encoded and remapped depending on the availability of visual feedback and on the target position within the individual's reachable space, and which cerebral areas subserve such processes. Here, functional magnetic resonance imaging (fMRI) was used to examine neural activity while healthy young participants performed reach-to-grasp movements with and without visual feedback and at different distances of the target from the effector (near to the hand, about 15 cm from the starting position, vs. far from the hand, about 30 cm from the starting position). Brain response in the superior parietal lobule bilaterally, in the right dorsal premotor cortex, and in the anterior part of the right inferior parietal lobule was significantly greater during visually guided grasping of targets located at the far distance compared to grasping of targets located near to the hand. In the absence of visual feedback, the inferior parietal lobule exhibited greater activity during grasping of targets at the near compared to the far distance. Results suggest that in the presence of visual feedback, a visuo-motor circuit integrates visuo-motor information when targets are located farther away. Conversely, in the absence of visual feedback, encoding of space may demand multisensory remapping processes, even in the case of more proximal targets.

    Gaze fixation improves the stability of expert juggling

    Novice and expert jugglers employ different visuomotor strategies: whereas novices look at the balls around their zeniths, experts tend to fixate their gaze at a central location within the pattern (the so-called gaze-through strategy). A gaze-through strategy may reflect visuomotor parsimony, i.e., the use of simpler visuomotor (oculomotor and/or attentional) strategies as afforded by superior tossing accuracy and error corrections. In addition, the more stable gaze during a gaze-through strategy may result in more accurate movement planning by providing a stable base for gaze-centered neural coding of ball motion and movement plans or for shifts in attention. To determine whether a stable gaze might indeed have such beneficial effects on juggling, we examined juggling variability during 3-ball cascade juggling with and without constrained gaze fixation (at various depths) in expert performers (n = 5). Novice jugglers (n = 5) were included for comparison, even though our predictions pertained specifically to expert juggling. We indeed observed that experts, but not novices, juggled with significantly less variability when fixating, compared to unconstrained viewing. Thus, while visuomotor parsimony might still contribute to the emergence of a gaze-through strategy, this study highlights an additional role for improved movement planning. This role may be engendered by gaze-centered coding and/or attentional control mechanisms in the brain.

    Pre-Stimulus Activity Predicts the Winner of Top-Down vs. Bottom-Up Attentional Selection

    Our ability to process visual information is fundamentally limited. This leads to competition between sensory information that is relevant for top-down goals and sensory information that is perceptually salient, but task-irrelevant. The aim of the present study was to identify, from EEG recordings, pre-stimulus and pre-saccadic neural activity that could predict whether top-down or bottom-up processes would win the competition for attention on a trial-by-trial basis. We employed a visual search paradigm in which a lateralized low-contrast target appeared alone, or with a low-contrast (i.e., non-salient) or high-contrast (i.e., salient) distractor. Trials with a salient distractor were of primary interest due to the strong competition between top-down knowledge and bottom-up attentional capture. Our results demonstrated that (1) in the 1-s pre-stimulus interval, frontal alpha (8–12 Hz) activity was higher on trials where the salient distractor captured attention and the first saccade (bottom-up win); and (2) there was a transient pre-saccadic increase in posterior-parietal alpha (7–8 Hz) activity on trials where the first saccade went to the target (top-down win). We propose that the high frontal alpha reflects a disengagement of attentional control, whereas the transient posterior alpha time-locked to the saccade indicates sensory inhibition of the salient distractor and suppression of bottom-up oculomotor capture.
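
    A minimal sketch of how single-trial pre-stimulus alpha power can be extracted is given below. It is not the authors' pipeline: the sampling rate, filter order, and synthetic data are assumptions, and it uses a standard band-pass-plus-Hilbert-envelope approach on one simulated channel.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 500                                  # sampling rate in Hz (assumed)
        t = np.arange(-1.0, 0.5, 1 / fs)          # time axis in s; 0 = stimulus onset
        rng = np.random.default_rng(2)
        eeg = rng.normal(0, 1, t.size) + 0.8 * np.sin(2 * np.pi * 10 * t)  # synthetic trial

        b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="bandpass")
        alpha = filtfilt(b, a, eeg)               # zero-phase 8-12 Hz band-pass
        power = np.abs(hilbert(alpha)) ** 2       # instantaneous alpha power

        print(f"pre-stimulus alpha power: {power[t < 0].mean():.3f}")

    Trial-wise values computed this way could then be compared between trials on which the first saccade went to the target versus the salient distractor.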

    How Bodies and Voices Interact in Early Emotion Perception

    Successful social communication draws strongly on the correct interpretation of others' body and vocal expressions. Both can provide emotional information and often occur simultaneously. Yet their interplay has hardly been studied. Using electroencephalography, we investigated the temporal development underlying their neural interaction in auditory and visual perception. In particular, we tested whether this interaction qualifies as true integration following multisensory integration principles such as inverse effectiveness. Emotional vocalizations were embedded in either low or high levels of noise and presented with or without video clips of matching emotional body expressions. In both high and low noise conditions, a reduction in auditory N100 amplitude was observed for audiovisual stimuli. However, only under high noise did the N100 peak earlier in the audiovisual than in the auditory condition, suggesting facilitatory effects as predicted by the inverse effectiveness principle. Similarly, we observed earlier N100 peaks in response to emotional compared to neutral audiovisual stimuli. This was not the case in the unimodal auditory condition. Furthermore, suppression of beta-band oscillations (15–25 Hz), primarily reflecting biological motion perception, was modulated 200–400 ms after the vocalization. While larger differences in suppression between audiovisual and audio stimuli in high compared to low noise levels were found for emotional stimuli, no such difference was observed for neutral stimuli. This observation is in accordance with the inverse effectiveness principle and suggests a modulation of integration by emotional content. Overall, the results show that ecologically valid, complex stimuli such as combined body and vocal expressions are effectively integrated very early in processing.
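
    The inverse effectiveness principle invoked here can be illustrated with a small worked example (the numbers below are invented for illustration, not the study's data): the proportional multisensory benefit over the best unisensory response grows as the unisensory input gets weaker, e.g., under higher acoustic noise.

        responses = {
            # condition: (auditory-only, audiovisual) response magnitudes (hypothetical)
            "low noise":  (10.0, 11.0),
            "high noise": (4.0, 7.0),
        }

        for cond, (a_only, av) in responses.items():
            gain = (av - a_only) / a_only   # proportional multisensory enhancement
            print(f"{cond}: enhancement = {gain:.0%}")
        # low noise: 10%, high noise: 75% -- the benefit is largest when the
        # unisensory signal is degraded, the signature of inverse effectiveness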

    Representing 3D Space in Working Memory: Spatial Images from Vision, Hearing, Touch, and Language

    The chapter deals with a form of transient spatial representation referred to as a spatial image. Like a percept, it is externalized, scaled to the environment, and can appear in any direction about the observer. It transcends the concept of modality, as it can be based on inputs from the three spatial senses, from language, and from long-term memory. Evidence is presented that supports each of the claimed properties of the spatial image, showing that it is quite different from a visual image. Much of the evidence presented is based on spatial updating. A major concern is whether spatial images from different input modalities are functionally equivalent, i.e., whether, once instantiated in working memory, the spatial images from different modalities have the same functional characteristics with respect to subsequent processing, such as that involved in spatial updating. Going further, the research provides some evidence that spatial images are amodal (i.e., do not retain modality-specific features).
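
    Spatial updating itself has a compact computational core, sketched below under simplifying assumptions (a 2-D egocentric representation, with hypothetical coordinates and function names): as the observer translates and rotates, the stored spatial image must be counter-transformed so that it stays anchored to the world.

        import numpy as np

        def update(egocentric_xy, translation_xy, rotation_deg):
            # Remove the observer's translation, then counter-rotate by the
            # inverse of the body rotation so the image stays world-stable
            th = np.deg2rad(-rotation_deg)
            R = np.array([[np.cos(th), -np.sin(th)],
                          [np.sin(th),  np.cos(th)]])
            return R @ (np.asarray(egocentric_xy) - np.asarray(translation_xy))

        # Object represented 2 m straight ahead; observer steps 1 m forward
        # and turns 90 deg to the left
        print(update([0.0, 2.0], [0.0, 1.0], 90.0))
        # -> approximately [1, 0]: the object is now 1 m to the observer's
        # right, whether it was originally seen, heard, touched, or described

    The same update applies regardless of the input modality that instantiated the spatial image, which is exactly what functional equivalence across modalities would predict.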